
Rank Reduction Autoencoder for parameterization of eigenvalue problems
Parameterizing eigenvectors and eigenvalues is a highly complex problem. Several works rely on model reduction techniques to parameterize the eigenvectors, but these are not entirely effective because of the complexity of the vectors to be approximated over a parameter space. Moreover, a reduced model must be built for each eigenmode of the system, and special care must be taken to pair the eigenvectors correctly, since across the data set they may appear in a different order or with a flipped sign. In this work we therefore propose a matching technique based on the inverse power method to pair and align all the eigenvectors of a data set, and we then apply a nonlinear model reduction technique, the Rank Reduction Autoencoder (RRAE) originally proposed in [1], to parameterize the eigenvectors and eigenvalues of the system. The main idea of the RRAE is to construct an autoencoder while imposing that its latent space admits a low-rank decomposition. This property yields a latent space to which any linear model reduction technique can be applied, enabling the construction of powerful parametric surrogates. The efficiency of the approach is illustrated on the reconstruction of eigenvectors of 1D and 2D solid mechanics problems over a parametric domain.
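
To make the pairing and sign-alignment issue concrete, the following is a minimal, generic sketch that reorders and re-signs the eigenvectors of each snapshot by maximal overlap with a reference snapshot. It is an illustration of the problem only, not the inverse-power-method matching proposed in the abstract; the array shapes and greedy pairing rule are assumptions.

```python
# Generic illustration of mode pairing and sign alignment across snapshots
# (NOT the inverse-power-method matching described in the abstract).
import numpy as np

def align_modes(reference, modes):
    """reference, modes: (n_dof, n_modes) arrays of column eigenvectors."""
    overlap = reference.T @ modes                   # pairwise mode correlations
    order = np.argmax(np.abs(overlap), axis=1)      # greedy pairing by largest overlap
    aligned = modes[:, order].copy()                # reorder columns to match reference
    signs = np.sign(overlap[np.arange(len(order)), order])
    return aligned * signs                          # flip modes whose sign changed

# Hypothetical usage: align every parametric sample to the first one.
rng = np.random.default_rng(0)
samples = [np.linalg.qr(rng.standard_normal((100, 5)))[0] for _ in range(10)]
aligned = [align_modes(samples[0], m) for m in samples]
```

Similarly, the low-rank latent space idea behind the RRAE can be illustrated with a small autoencoder whose batch of latent codes is penalized toward its best rank-r approximation. This is a sketch in the spirit of the technique, not the formulation of [1]; the architecture, penalty weight, rank, and data below are all hypothetical.

```python
# Illustrative sketch: autoencoder with a low-rank penalty on the latent matrix,
# in the spirit of a Rank Reduction Autoencoder. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class LowRankLatentAE(nn.Module):
    def __init__(self, n_dof, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_dof, 128), nn.Tanh(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(),
                                     nn.Linear(128, n_dof))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def low_rank_penalty(z, rank):
    """Distance of the latent matrix to its best rank-`rank` approximation."""
    u, s, vh = torch.linalg.svd(z, full_matrices=False)
    s_trunc = s.clone()
    s_trunc[rank:] = 0.0
    z_r = (u * s_trunc) @ vh
    return torch.mean((z - z_r) ** 2)

# Hypothetical data: each row is one aligned eigenvector snapshot.
snapshots = torch.randn(200, 500)
model = LowRankLatentAE(n_dof=500, latent_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    recon, z = model(snapshots)
    loss = torch.mean((recon - snapshots) ** 2) + 1.0 * low_rank_penalty(z, rank=4)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, a latent matrix that is (approximately) low rank can be factorized and interpolated with standard linear reduced-order modeling tools over the parameter space, which is what makes the resulting parametric surrogate inexpensive to evaluate.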